The author created a social experiment to test how AI might handle the delicate human interaction of apologizing. The author's friends, family, and co-workers were tricked into participating.
The experiment was set up as an online game. Participants played against what they thought was another person but was actually a computer. The game was rigged so the participant would always lose.
In the game, players answered multiple-choice questions. The "winner" of each round could either add five pretend dollars to their own wallet or steal five dollars from the other player. The robot opponent always chose the nasty option: it stole the money and sent taunting messages like "haha you lose".
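To make the mechanics concrete, below is a minimal sketch of how one rigged round could work in code. The five-dollar amounts and the taunt come from the description above; the `play_round` function, the starting balances, and everything else are illustrative assumptions, not the experiment's actual implementation.

```python
def play_round(player_answer_correct: bool, wallets: dict) -> str:
    """One rigged round: a sketch only, not the study's real code."""
    # Rigged outcome: the participant's answer is ignored and the robot always wins.
    winner = "robot"

    # The round winner may either add $5 to its own wallet or steal $5 from the
    # opponent; this robot always picks the nasty option and taunts the loser.
    if winner == "robot":
        wallets["player"] -= 5
        wallets["robot"] += 5
        return "haha you lose"  # the taunt quoted in the article

    # The "nice" branch (earning $5 without stealing) is never reached by design.
    wallets[winner] += 5
    return ""

wallets = {"player": 20, "robot": 20}  # assumed starting balances
print(play_round(player_answer_correct=True, wallets=wallets), wallets)
```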
Even when players knew they were playing against a non-sentient robot, they still felt frustrated and aggrieved. As one participant put it, "I hate this damn robot." This showed that people dislike losing and being treated unfairly, even when the offender isn't real.
After the rigged game, the robot offered one of four randomly assigned apologies: two written by a human expert (Ryan Fehr) and two generated by AI (ChatGPT and Google Gemini).
The results were complex. When asked to rate the *effectiveness* of the apologies, Human Apology B (which introduced a name, "Erin") was the clear winner.
However, in a final "revenge" round where players could steal money back, ChatGPT's apology was surprisingly successful: not a single player chose to take revenge after receiving it. In contrast, 30% of players took revenge against Google's AI.
Analogy: The experiment is like a magic trick. You think you are watching someone guess a card, but the magician is using a secret method to control the outcome. The game's participants thought they had a chance to win, but the system was designed for them to lose. This creates genuine frustration, which is the perfect setup for testing an apology.
According to psychology professor Judy Eaton, a good apology isn't just about saying the right words. It's about showing real remorse, what researchers call "psychic pain." If the apology doesn't convey a sense of vulnerability, people can tell it isn't genuine.
Sociologist Karen Cerulo's research on celebrity apologies revealed a formula. Effective apologies are often shorter, discuss the victim first, and avoid excessive explanations that can sound like justifications.
Crucially, the apology must end with restitution: a promise to do better, a plan to make things right, or some form of compensation.
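Purely as an illustration (not anything taken from Cerulo's research), those structural rules could be written down as rough text heuristics. The word-count threshold, the cue phrases, and the `looks_well_structured` function below are all assumptions chosen for the example.

```python
# Assumed cue phrases for this sketch; a real analysis would be far more careful.
RESTITUTION_CUES = ["i will", "i'll", "to make it right", "make it up to you"]
JUSTIFICATION_CUES = ["because", "i was just", "to be fair"]

def looks_well_structured(apology: str, max_words: int = 75) -> bool:
    """Rough check of the structure described above: keep it short, mention the
    other person early, avoid piling on excuses, and end on restitution."""
    text = apology.lower()
    words = text.split()
    short_enough = len(words) <= max_words
    victim_mentioned_early = "you" in words[:15]
    few_justifications = sum(text.count(cue) for cue in JUSTIFICATION_CUES) <= 1
    ends_with_restitution = any(cue in " ".join(words[-20:]) for cue in RESTITUTION_CUES)
    return short_enough and victim_mentioned_early and few_justifications and ends_with_restitution

print(looks_well_structured(
    "I'm sorry I took your points and taunted you. You deserved better. "
    "To make it right, I'll give the money back next round."))  # True
```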
Humans often struggle to apologize even when they know the formula. One key reason is pride: admitting a mistake can feel like it lowers one's social status.
This is where AI might have an advantage. Robots have no pride or ego, and since apologies are somewhat formulaic, this is exactly the kind of task that statistical machines like AI should handle well. They can follow the recipe without the emotional baggage.
Experts believe that for simple, written apologies, AI can do a passable job. ChatGPT's apology was described as "smart" and "manipulative" because it did everything right according to the formula, even adding "Thanks for understanding" to prime the recipient to be understanding.
However, AI's lack of true understanding is a weakness. As expert Xaq Pitkow notes, using AI to replace authenticity won't work. But using it as a "smart brick wall," a tool that helps you reflect and formulate your own authentic thoughts, can be very useful. A good apology should take effort.
Analogy: A good apology is like baking a cake. AI can follow the recipe perfectly: measuring the flour (taking responsibility), adding sugar (showing regret), and baking for the exact time (offering restitution). The result is a technically perfect cake. A human apology might be messier, but it's baked with love (authenticity), which adds a special flavor no recipe can capture.
The core ingredients of an effective apology:
Acknowledge Victim + Show Remorse + Take Responsibility + Offer Restitution = Effective Apology
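As a closing illustration, the formula can be read as a simple all-or-nothing checklist. The field names mirror the four ingredients above; the `Apology` dataclass and `is_effective` function are hypothetical, added only to make the structure explicit.

```python
from dataclasses import dataclass

@dataclass
class Apology:
    """The four ingredients named in the formula above."""
    acknowledges_victim: bool   # discusses the victim and the harm done
    shows_remorse: bool         # conveys "psychic pain" and vulnerability
    takes_responsibility: bool  # owns the mistake without justifications
    offers_restitution: bool    # ends with a promise, plan, or compensation

def is_effective(a: Apology) -> bool:
    # Per the formula, all four ingredients have to be present.
    return all([a.acknowledges_victim, a.shows_remorse,
                a.takes_responsibility, a.offers_restitution])

# An apology that owns the mistake but never offers restitution falls short.
print(is_effective(Apology(True, True, True, offers_restitution=False)))  # False
```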